About ROS

Below you will find information about the ROS packages, nodes, and topics used in this project.

Terminology

ROS (Robot Operating System): a collection of software frameworks for robot software development. It provides services designed for hardware abstraction, low-level device control, implementation of commonly used functionality, message passing between processes, and package management.

ROS node: a process that performs computation. Nodes are combined into a graph and communicate with one another using streaming topics, RPC services, and the Parameter Server.
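
To make the terminology concrete, here is a minimal sketch of a ROS node written with rospy; the node name and topic names are hypothetical and not part of this project:

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node; node and topic names are hypothetical.
import rospy
from std_msgs.msg import String

def callback(msg):
    # Runs every time another node publishes on /chatter_in.
    rospy.loginfo("heard: %s", msg.data)

def main():
    rospy.init_node("example_node")  # register this process with the ROS master
    pub = rospy.Publisher("/chatter_out", String, queue_size=10)
    rospy.Subscriber("/chatter_in", String, callback)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello"))
        rate.sleep()

if __name__ == "__main__":
    main()
```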

Packages & Nodes

Here is a list of packages. Underneath each package are the nodes in that package.

simulation

The major purpose of the simulation package is to connect our self-driving system to the CARLA simulator. To run the package, please refer to the documentation [here](./src/simulation/README.md).

The simulation package can also generate simulated camera inputs using the camera_sim_node; a sketch of such a publisher follows the launch-file list below.

Nodes:

$ carla_client
$ camera_sim_node

Launch Files:

$ carla_client.launch
$ carla_client_with_rviz.launch
$ carla_client_with_rqt.launch
$ start_camera_sim.launch
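
As an illustration of how a simulated camera input can be produced (a sketch under assumptions; the actual camera_sim_node may source its frames from CARLA or from recorded data instead), the following node publishes frames from a video file as sensor_msgs/Image messages on /cv_camera_node/image_sim:

```python
#!/usr/bin/env python
# Sketch of a simulated camera publisher; the real camera_sim_node may differ.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def main():
    rospy.init_node("camera_sim_example")
    pub = rospy.Publisher("/cv_camera_node/image_sim", Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture("driving_footage.mp4")  # hypothetical pre-recorded video
    rate = rospy.Rate(30)  # assume a ~30 FPS camera
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if not ok:
            break  # end of the recording
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()

if __name__ == "__main__":
    main()
```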

autopilot

The autopilot package is the brain of the self-driving car. It uses end-to-end deep learning to predict the vehicle's steering, acceleration, and braking commands while subscribing to the camera feed. (Node currently functioning.) The Arduino subscribes to the steering_cmds topic and controls the steering accordingly. A sketch of this data flow follows the topic lists below.

Nodes:

$ autopilot
$ visualization

Publishes (the autopilot node):

$ /vehicle/dbw/steering_cmds/
$ /vehicle/dbw/cruise_cmds/

Subscribes (all nodes):

$ /cv_camera_node/image_raw
$ /cv_camera_node/image_sim
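
To make the data flow concrete, here is a hedged sketch of the subscribe-predict-publish loop described above. The command message type (std_msgs/Float32), the Keras model file, and the pre/post-processing are assumptions for illustration, not the node's actual implementation:

```python
#!/usr/bin/env python
# Sketch of the autopilot data flow; message types and model API are assumed.
import rospy
from cv_bridge import CvBridge
from keras.models import load_model  # assumes a Keras model; the real node may differ
from sensor_msgs.msg import Image
from std_msgs.msg import Float32

class AutopilotSketch(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.model = load_model("autopilot.h5")  # hypothetical end-to-end model file
        self.steer_pub = rospy.Publisher("/vehicle/dbw/steering_cmds/", Float32, queue_size=1)
        self.cruise_pub = rospy.Publisher("/vehicle/dbw/cruise_cmds/", Float32, queue_size=1)
        rospy.Subscriber("/cv_camera_node/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        pred = self.model.predict(frame[None, ...])  # assumed: model takes a batch of frames
        steering, throttle = float(pred[0][0]), float(pred[0][1])
        self.steer_pub.publish(Float32(data=steering))
        self.cruise_pub.publish(Float32(data=throttle))

if __name__ == "__main__":
    rospy.init_node("autopilot_sketch")
    AutopilotSketch()
    rospy.spin()
```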

object_detection

A real-time object detection system based on YOLO (You Only Look Once).

Nodes:

$ object_detection_node

Publishes:

$ /detection/object/detection_visualization/
$ /detection/object/detection_result

Subscribes:

$ /cv_camera_node/image_raw

segmentation

Semantic segmentation node based on deep convolutional neural networks (ConvNets).

Nodes:

$ segmentation_node

Publishes:

$ /segmentation/visualization/
$ /segmentation/output

Subscribes:

$ /cv_camera_node/image_raw

cv_camera

The cameras are the main sensors of the self-driving car.

Nodes:

$ cv_camera_node

Publishes:

$ /cv_camera_node/image_raw

driver

This is the main package of the project. It pulls together all the individual nodes to create a complete self-driving system.

Nodes:

$ drive

gps

Used for localization. It currently uses the Adafruit GPS module over a serial connection.

Nodes:

$ gps_receiver
$ nmea_topic_driver
$ nmea_topic_serial_reader

The gps package manages and publishes the data received from a GPS module connected via serial.

Publishes:

$ /sensor/gps/fix
$ /sensor/gps/vel
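
For reference, a downstream node can consume the fix topic as shown below; the sketch assumes the standard sensor_msgs/NavSatFix message type used by NMEA GPS drivers:

```python
#!/usr/bin/env python
# Sketch of a GPS consumer; assumes /sensor/gps/fix carries sensor_msgs/NavSatFix.
import rospy
from sensor_msgs.msg import NavSatFix

def on_fix(msg):
    # Latitude/longitude in degrees, altitude in meters.
    rospy.loginfo("lat=%.6f lon=%.6f alt=%.1f", msg.latitude, msg.longitude, msg.altitude)

if __name__ == "__main__":
    rospy.init_node("gps_listener_example")
    rospy.Subscriber("/sensor/gps/fix", NavSatFix, on_fix)
    rospy.spin()
```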

osm_cartography

Nodes:

$ osm_client
$ osm_server
$ viz_osm

This package broadcasts and processes .osm files. OSM files are OpenStreetMap files that contain detailed information about the environment, such as the coordinates of roads, buildings, and landmarks. Currently, the main function of the package is to broadcast the OSM data to rviz for visualization. (Node currently functioning.)
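
To illustrate what an .osm file contains (not how this package is implemented), the short sketch below reads raw OpenStreetMap XML with Python's standard library and prints the coordinates of every map node; the file name is hypothetical:

```python
#!/usr/bin/env python
# Illustration only: read raw OpenStreetMap XML and print node coordinates.
# The osm_cartography package itself works with its own messages and services.
import xml.etree.ElementTree as ET

def print_osm_nodes(path):
    tree = ET.parse(path)
    for node in tree.getroot().iter("node"):  # <node id="..." lat="..." lon="...">
        print(node.get("id"), node.get("lat"), node.get("lon"))

if __name__ == "__main__":
    print_osm_nodes("map.osm")  # hypothetical map file
```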

ROS Topics for visualization

$ /visual/steering/angle_img
$ /visual/detection/object/bbox_img
$ /visual/detection/lane/marking_img
$ /visual/segmentation/seg_img